Like, you might think the more things you know about smart AIs, the easier it would be to build them—where does this argument break?
I mean… it doesn’t? I guess I mostly think that either what I’m working on is totally off the capabilities pathway, or, if it’s somehow on one, then whatever minor framework improvement or suggestion for a mental frame I come up with isn’t going to push things all that far. Which I agree is kind of a depressing thing to expect of your own work, but I’d argue those are the two most likely outcomes here. Does that address it?